Stretching Deep Architectures: A Deep Learning Method without Back-Propagation Optimization
Authors
Abstract
In recent years, researchers have proposed many deep learning algorithms for data representation learning. However, most networks require extensive training and a lot of time to obtain good results. In this paper, we propose a novel method based on stretching deep architectures that are composed of stacked feature learning models. Hence, the method is called "stretching deep architectures" (SDA). In the feedforward propagation of SDA, the feature learning models are first learned layer by layer, and then a stretching technique is applied to map the last-layer features to a high-dimensional space. Since the feature models are optimized effectively and the stretching weights can be easily calculated, the training of SDA is very fast. More importantly, SDA does not need back-propagation optimization, which makes it quite different from existing deep learning methods. We tested SDA in visual texture perception, handwritten text recognition, and natural image classification applications. Extensive experiments demonstrate its advantages over traditional and related methods.
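As a rough illustration of this training scheme, the sketch below builds a small SDA-style pipeline in NumPy. The stacked feature models are assumed to be simple PCA layers, the stretching step is assumed to be a random projection into a higher-dimensional space, and the readout weights are solved in closed form by ridge regression instead of back-propagation. These concrete choices (PCA layers, random stretching weights, the ridge readout, and the toy data) are illustrative assumptions, not the paper's exact components.

# Hypothetical sketch of an SDA-style pipeline (not the authors' released code).
# Assumptions: PCA layers as the stacked feature models, a random projection as
# the "stretching" step, and a closed-form ridge-regression readout, so that no
# back-propagation is needed anywhere in training.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64))   # toy inputs
y = rng.integers(0, 3, size=500)     # toy labels (3 classes)
Y = np.eye(3)[y]                     # one-hot targets

# 1) Learn the feature models layer by layer.
H, layers = X, []                    # layers kept so new data can be propagated later
for n_comp in (48, 32):
    mean = H.mean(axis=0)
    _, _, Vt = np.linalg.svd(H - mean, full_matrices=False)
    W = Vt[:n_comp].T                # PCA projection for this layer
    layers.append((W, mean))
    H = np.tanh((H - mean) @ W)      # nonlinear feed-forward features

# 2) "Stretch" the last-layer features into a high-dimensional space.
D = 1024
W_s = rng.standard_normal((H.shape[1], D)) / np.sqrt(H.shape[1])
S = np.tanh(H @ W_s)

# 3) Compute the readout weights in closed form (ridge regression).
lam = 1e-2
W_out = np.linalg.solve(S.T @ S + lam * np.eye(D), S.T @ Y)
print("train accuracy:", ((S @ W_out).argmax(1) == y).mean())

Because every step is either a layer-wise fit or a linear solve, the whole pipeline trains in one pass, which is the property the abstract attributes to SDA.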
Similar resources
A Hybrid Optimization Algorithm for Learning Deep Models
Deep learning is one of the subsets of machine learning that is widely used in Artificial Intelligence (AI) field such as natural language processing and machine vision. The learning algorithms require optimization in multiple aspects. Generally, model-based inferences need to solve an optimized problem. In deep learning, the most important problem that can be solved by optimization is neural n...
Learning by Stretching Deep Networks
In recent years, deep architectures have gained a lot of prominence for learning complex AI tasks because of their capability to incorporate complex variations in data within the model. However, these models often need to be trained for a long time in order to obtain good results. In this paper, we propose a technique, called ‘stretching’, that allows the same models to perform considerably bet...
Layer multiplexing FPGA implementation for deep back-propagation learning
Training of large scale neural networks, like those used nowadays in Deep Learning schemes, requires long computational times or the use of high performance computation solutions like those based on cluster computation, GPU boards, etc. As a possible alternative, in this work the Back-Propagation learning algorithm is implemented in an FPGA board using a multiplexing layer scheme, in which a si...
Training Simplification and Model Simplification for Deep Learning: A Minimal Effort Back Propagation Method
We propose a simple yet effective technique to simplify the training and the resulting model of neural networks. In back propagation, only a small subset of the full gradient is computed to update the model parameters. The gradient vectors are sparsified in such a way that only the top-k elements (in terms of magnitude) are kept. As a result, only k rows or columns (depending on the layout) of ...
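A minimal NumPy sketch of that top-k idea for a single linear layer is shown below; the layer shapes, the squared-error loss, and the choice k = 2 are illustrative assumptions rather than the referenced paper's setup.

# Minimal sketch of top-k gradient sparsification for one linear layer
# (an illustration of the idea, not the paper's reference implementation).
import numpy as np

def topk_mask(grad, k):
    """Keep only the k largest-magnitude entries of a 1-D gradient vector."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    mask = np.zeros_like(grad)
    mask[idx] = 1.0
    return grad * mask

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 5)) * 0.1   # weights of a toy linear layer
x = rng.standard_normal(10)              # one input example
target = rng.standard_normal(5)          # regression target

# Forward pass and squared-error gradient w.r.t. the layer output.
out = x @ W
grad_out = out - target

# Sparsify the back-propagated gradient: keep only its top-k entries.
grad_out = topk_mask(grad_out, k=2)

# Only the columns of W touched by the surviving entries are updated.
grad_W = np.outer(x, grad_out)
W -= 0.1 * grad_W

Because the sparsified gradient has only k nonzero entries, the outer product touches only k columns of the weight matrix, which is the effect described in the abstract above.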
Journal
Journal title: Electronics
Year: 2023
ISSN: 2079-9292
DOI: https://doi.org/10.3390/electronics12071537